Bounded Concurrent Timestamp Systems Using Vector Clocks
Shared registers are basic objects used as communication mediums in
asynchronous concurrent computation. A concurrent timestamp system is a higher
typed communication object, and has been shown to be a powerful tool to solve
many concurrency control problems. It has turned out to be possible to
construct such higher typed objects from primitive lower typed ones. The next
step is to find efficient constructions. We propose a very efficient wait-free
construction of bounded concurrent timestamp systems from 1-writer multireader
registers. This finalizes, corrects, and extends a preliminary bounded
multiwriter construction proposed by the second author in 1986. That work
partially initiated the current interest in wait-free concurrent objects, and
introduced a notion of discrete vector clocks in distributed algorithms. Comment: LaTeX source, 35 pages; To appear in: J. Assoc. Comp. Mach.
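As a rough illustration of the vector-clock notion mentioned above, here is a minimal, unbounded sketch in Python (illustrative only; the paper's contribution is a bounded wait-free construction from 1-writer multi-reader registers, which this sketch does not attempt to reproduce):

```python
# Minimal, unbounded vector clock for illustration; not the paper's bounded construction.
class VectorClock:
    def __init__(self, n_procs: int, pid: int):
        self.pid = pid
        self.clock = [0] * n_procs           # one counter per process

    def tick(self) -> list:
        """Local event: advance this process's own component and return a copy."""
        self.clock[self.pid] += 1
        return list(self.clock)

    def merge(self, other: list) -> None:
        """Receive another process's clock: component-wise max, then advance own entry."""
        self.clock = [max(a, b) for a, b in zip(self.clock, other)]
        self.clock[self.pid] += 1

    def happened_before(self, other: list) -> bool:
        """True if this clock causally precedes `other` in the partial order."""
        return all(a <= b for a, b in zip(self.clock, other)) and self.clock != other


if __name__ == "__main__":
    a, b = VectorClock(2, pid=0), VectorClock(2, pid=1)
    msg = a.tick()                       # event at process 0: a.clock == [1, 0]
    b.merge(msg)                         # process 1 receives it: b.clock == [1, 1]
    print(a.happened_before(b.clock))    # True: a's event causally precedes b's state
```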
Shannon Information and Kolmogorov Complexity
We compare the elementary theories of Shannon information and Kolmogorov
complexity, the extent to which they have a common purpose, and where they are
fundamentally different. We discuss and relate the basic notions of both
theories: Shannon entropy versus Kolmogorov complexity, the relation of both to
universal coding, Shannon mutual information versus Kolmogorov (`algorithmic')
mutual information, probabilistic sufficient statistic versus algorithmic
sufficient statistic (related to lossy compression in the Shannon theory versus
meaningful information in the Kolmogorov theory), and rate distortion theory
versus Kolmogorov's structure function. Part of the material has appeared in
print before, scattered through various publications, but this is the first
comprehensive systematic comparison. The last-mentioned relations are new. Comment: Survey, LaTeX 54 pages, 3 figures, Submitted to IEEE Trans. Information Theory
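For reference, the textbook forms of the paired notions the survey compares (standard definitions, not quoted from the abstract; $U$ is a fixed universal prefix machine and $x^{*}$ a shortest program for $x$):

```latex
\begin{align*}
  H(X)   &= -\sum_{x} p(x)\log p(x)      && \text{Shannon entropy (expected code length)}\\
  K(x)   &= \min\{\,|p| : U(p)=x\,\}     && \text{Kolmogorov complexity (shortest program for $x$)}\\
  I(X;Y) &= H(X) - H(X\mid Y)            && \text{Shannon mutual information}\\
  I(x:y) &= K(y) - K(y\mid x^{*})        && \text{algorithmic mutual information (up to $O(\log)$ terms)}
\end{align*}
```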
Randomized Two-Process Wait-Free Test-and-Set
We present the first explicit, and currently simplest, randomized algorithm
for 2-process wait-free test-and-set. It is implemented with two 4-valued
single writer single reader atomic variables. A test-and-set takes at most 11
expected elementary steps, while a reset takes exactly 1 elementary step. Based
on a finite-state analysis, the proofs of correctness and expected length are
compressed into one table. Comment: 9 pages, 4 figures, LaTeX source; Submitted
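For orientation, here is a sketch of the sequential specification such an object must satisfy, written as a lock-based Python reference (this is not the paper's randomized wait-free algorithm; the two 4-valued single-writer single-reader variables and the expected-step analysis are not reproduced here):

```python
import threading

# Lock-based reference specification of test-and-set; a correctness yardstick only,
# not the wait-free construction described in the abstract.
class TestAndSetSpec:
    def __init__(self):
        self._lock = threading.Lock()
        self._taken = False

    def test_and_set(self) -> int:
        """Return 0 to the unique winner of a round, 1 to every loser."""
        with self._lock:
            if not self._taken:
                self._taken = True
                return 0        # winner
            return 1            # loser

    def reset(self) -> None:
        """Release the object so a new round can be decided."""
        with self._lock:
            self._taken = False


if __name__ == "__main__":
    tas = TestAndSetSpec()
    print(tas.test_and_set())   # 0: first caller wins
    print(tas.test_and_set())   # 1: later caller loses
    tas.reset()
```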
Minimum Description Length Induction, Bayesianism, and Kolmogorov Complexity
The relationship between the Bayesian approach and the minimum description
length approach is established. We sharpen and clarify the general modeling
principles MDL and MML, abstracted as the ideal MDL principle and defined from
Bayes's rule by means of Kolmogorov complexity. The basic condition under which
the ideal principle should be applied is encapsulated as the Fundamental
Inequality, which in broad terms states that the principle is valid when the
data are random relative to every contemplated hypothesis, and these
hypotheses are in turn random relative to the (universal) prior. Basically, the ideal
principle states that the prior probability associated with the hypothesis
should be given by the algorithmic universal probability, and that the sum of the
negative log universal probability of the model and the negative log probability of the
data given the model should be minimized. If we restrict the model class to the
finite sets then application of the ideal principle turns into Kolmogorov's
minimal sufficient statistic. In general we show that data compression is
almost always the best strategy, both in hypothesis identification and
prediction. Comment: 35 pages, LaTeX. Submitted to IEEE Trans. Inform. Theory
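Schematically, and in assumed notation not taken from the abstract ($\mathbf{m}$ for the universal prior, $K$ for prefix Kolmogorov complexity), the ideal MDL selection described above reads:

```latex
% Ideal MDL: pick the hypothesis minimizing the total (negative log) code length.
\[
  H_{\mathrm{MDL}}
    = \arg\min_{H}\Bigl( -\log \mathbf{m}(H) \;-\; \log \Pr(D \mid H) \Bigr)
    \;\approx\; \arg\min_{H}\Bigl( K(H) + K(D \mid H) \Bigr)
\]
% via the coding theorem $-\log \mathbf{m}(H) = K(H) + O(1)$; the Fundamental
% Inequality gives the condition under which $-\log \Pr(D \mid H)$ may be
% replaced by $K(D \mid H)$: the data are random relative to $H$, and $H$ is
% random relative to the prior.
```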
Reversibility and Adiabatic Computation: Trading Time and Space for Energy
Future miniaturization and mobilization of computing devices require
energy-parsimonious `adiabatic' computation. This is contingent on logical
reversibility of computation. An example is the idea of quantum computations
which are reversible except for the irreversible observation steps. We propose
to study quantitatively the exchange of computational resources like time and
space for irreversibility in computations. Reversible simulations of
irreversible computations are memory intensive. Such (polynomial time)
simulations are analysed here in terms of `reversible' pebble games. We show
that Bennett's pebbling strategy uses least additional space for the greatest
number of simulated steps. We derive a trade-off for storage space versus
irreversible erasure. Next we consider reversible computation itself. An
alternative proof is provided for the precise expression of the ultimate
irreversibility cost of an otherwise reversible computation without
restrictions on time and space use. A time-irreversibility trade-off hierarchy
in the exponential time region is exhibited. Finally, extreme
time-irreversibility trade-offs for reversible computations in the thoroughly
unrealistic range of computable versus noncomputable time-bounds are given. Comment: 30 pages, LaTeX. Lemma 2.3 should be replaced by the slightly better ``There is a winning strategy with ... pebbles and ... erasures for pebble games with ..., for all ...'' with appropriate further changes (as pointed out by Wim van Dam). This and further work on reversible simulations as in Section 2 appear in quant-ph/970300
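As an illustration of the recursive checkpointing idea behind Bennett's pebbling strategy, here is a minimal Python sketch of the standard reversible pebble-game recursion (segment counts assumed to be powers of two; the move and pebble counts in the comments are the usual ones for this recursion, not figures quoted from the paper):

```python
# Reversible pebble game on a line 0..n: a pebble may be placed on or removed from
# node i+1 only while node i carries a pebble (node 0, the input, is always available).
# This recursion reaches node 2^k with about k+1 simultaneous pebbles at the cost of
# roughly 3^k moves -- space saved, time paid.

def bennett_pebble(moves, lo, span):
    """Pebble node lo+span (span a power of two), assuming node lo is pebbled."""
    if span == 1:
        moves.append(("place", lo + 1))
        return
    mid = span // 2
    bennett_pebble(moves, lo, mid)        # checkpoint the midpoint
    bennett_pebble(moves, lo + mid, mid)  # reach the target from the midpoint
    bennett_unpebble(moves, lo, mid)      # reversibly clear the midpoint checkpoint

def bennett_unpebble(moves, lo, span):
    """Undo bennett_pebble(lo, span): remove the pebble at lo+span reversibly."""
    if span == 1:
        moves.append(("remove", lo + 1))
        return
    mid = span // 2
    bennett_pebble(moves, lo, mid)        # restore the midpoint checkpoint
    bennett_unpebble(moves, lo + mid, mid)
    bennett_unpebble(moves, lo, mid)

if __name__ == "__main__":
    moves = []
    bennett_pebble(moves, 0, 8)           # simulate 2^3 segments
    pebbled = {0}                         # node 0 is always available
    for op, node in moves:
        if op == "place":
            pebbled.add(node)
        else:
            pebbled.discard(node)
    print(len(moves), sorted(pebbled))    # 27 moves; pebbles remain only on {0, 8}
```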